
    Statistical Process Control for Software: Fill the Gap

    Unlike manufacturing processes, software processes have a strong human-centered component and are primarily based on cognitive activities. As a consequence, each time a software process is executed, its inputs, outputs, and performance may vary. This phenomenon is known in the literature as “Process Diversity” (IEEE, 2000). This intrinsic diversity makes a software process difficult to predict, monitor, and improve, unlike processes in other contexts. In spite of these difficulties, Software Process Improvement (SPI) is an important activity that cannot be neglected. To face these problems, the software engineering community stresses the use of measurement-based approaches such as QIP/GQM (Basili et al., 1994) and time series analysis: the former is usually used to determine what improvement is needed, while the latter is adopted to monitor process performance. Time series analysis thus supports deciding when the process should be improved and provides a means to verify the effectiveness of the improvement itself. One time series analysis technique, well established in the literature, which has given insightful results in manufacturing contexts although not yet in software process ones, is Statistical Process Control (SPC) (Shewhart, 1980; Shewhart, 1986). The technique was originally developed by Shewhart in the 1920s and has since been used in many other contexts. Its basic idea is to use so-called “control charts”, together with their indicators, called run tests, to establish operational limits for acceptable process variation and to monitor and evaluate the evolution of process performance over time. In general, process performance variations are mainly due to two types of causes, classified as follows:
    - Common cause variations: the result of normal interactions of people, machines, environment, techniques used, and so on.
    - Assignable cause variations: arising from events that are not part of the process and that make it unstable.
    In this sense, SPC, as a statistically based approach, helps determine whether a process is stable by discriminating between common cause and assignable cause variation: a process is classified as “stable” or “under control” if only common causes occur. More precisely, in SPC, data points representing measures of process performance are collected and then compared to a measure of central tendency and to upper and lower limits of admissible performance variation. While SPC is a well-established technique in manufacturing contexts, only a few works in the literature (Card, 1994; Florac et al., 2000; Weller, 2000(a); Weller, 2000(b); Florence, 2001; Sargut & Demirors, 2006; Weller & Card, 2008; Raczynski & Curtis, 2008) present successful outcomes of SPC adoption for software. Moreover, not only are successful applications few, but they do not clearly illustrate the meaning of control charts and their indicators in the context of software processes.
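
To make the control-limit computation concrete, here is a minimal sketch (not from the paper; the data and function names are ours) of an XmR individuals chart, a chart type commonly used when each process execution yields a single measurement:

```python
# Minimal sketch of an XmR (individuals / moving range) control chart.
# The data below are hypothetical process-performance measures.

def xmr_limits(observations):
    """Compute centre line and control limits for an individuals chart."""
    center = sum(observations) / len(observations)
    # Moving ranges: absolute differences between consecutive points.
    moving_ranges = [abs(b - a) for a, b in zip(observations, observations[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    # 2.66 = 3 / d2, with d2 = 1.128 for subgroups of size 2 (standard SPC constant).
    return center, center - 2.66 * mr_bar, center + 2.66 * mr_bar

# Hypothetical data, e.g. defect-fix times in hours for successive fixes.
data = [4.2, 5.1, 3.8, 4.9, 5.5, 4.4, 6.0, 4.7]
center, lcl, ucl = xmr_limits(data)
print(f"CL={center:.2f}  LCL={lcl:.2f}  UCL={ucl:.2f}")

# A point outside [LCL, UCL] suggests an assignable cause; points within
# the limits are attributed to common cause variation.
```
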
Given the above considerations, the aim of this work is to generalize and bring together the experiences collected by the authors in previous studies on the use of Statistical Process Control in the software context (Baldassarre et al., 2004; Baldassarre et al., 2005; Caivano, 2005; Boffoli, 2006; Baldassarre et al., 2008; Baldassarre et al., 2009) and to present the resulting stepwise approach that: starting from the stability tests known in the literature, selects the most suitable ones for software processes (tests set); reinterprets them from a software process perspective (tests interpretation); and suggests a recalculation strategy for tuning the SPC control limits. The paper is organized as follows: section 2 briefly presents SPC concepts and peculiarities; section 3 discusses the main differences and shortcomings of SPC for software and presents the approach proposed by the authors; finally, in section 4 conclusions are drawn.
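
As an illustration of the kind of stability test such an approach would select and reinterpret, here is a sketch (our own, not the paper's implementation) of one classic run test: a run of eight consecutive points on the same side of the centre line signals a likely assignable cause even when no point falls outside the control limits. The run length of 8 is one common convention; variants use 7 or 9.

```python
def run_test_same_side(observations, center, run_length=8):
    """Return the indices at which a run of `run_length` consecutive
    points on the same side of the centre line completes."""
    signals = []
    run, side = 0, 0  # side: +1 above centre, -1 below, 0 on the line
    for i, x in enumerate(observations):
        s = 1 if x > center else -1 if x < center else 0
        run = run + 1 if (s == side and s != 0) else (1 if s != 0 else 0)
        side = s
        if run >= run_length:
            signals.append(i)
    return signals

# Example: with a centre line of 5.0, this flags a long run above it.
points = [5.6, 5.2, 5.9, 5.4, 5.7, 5.3, 5.8, 5.5, 4.1]
print(run_test_same_side(points, center=5.0))  # -> [7]
```
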

    Towards Knowledge Based Risk Management Approach in Software Projects

    All projects involve risk; a zero-risk project is not worth pursuing. Furthermore, due to the uniqueness of each software project, uncertainty about final results will always accompany software development. While risks cannot be removed from software development, software engineers should instead learn to manage them better (Arshad et al., 2009; Batista Webster et al., 2005; Gilliam, 2004). Risk management and planning require organizational experience, as they are strongly centred on the experience and knowledge acquired in former projects. The greater the project manager's experience, the better his or her ability to identify risks, estimate their likelihood and impact, and define an appropriate risk response plan. Risk knowledge therefore cannot remain in an individual dimension; rather, it must be made available to the organization, which needs it to learn and to enhance its performance in facing risks. If this does not occur, project managers may inadvertently repeat past mistakes, simply because they do not know or do not remember the mitigation actions successfully applied in the past, or because they are unable to foresee the risks caused by certain project constraints and characteristics. Risk knowledge has to be packaged and stored throughout project execution for future reuse. Risk management methodologies are usually based on questionnaires for risk identification and on templates for investigating critical issues. Such artefacts are often unrelated to each other, so there is usually no documented cause-effect relation between issues, risks, and mitigation actions. Furthermore, today's methodologies do not explicitly take into account the need to collect experience systematically in order to reuse it in future projects. To address these problems, this work proposes a framework based on the Experience Factory Organization (EFO) model (Basili et al., 1994; Basili et al., 2007; Schneider & Hunnius, 2003) and on the Quality Improvement Paradigm (QIP) (Basili, 1989). The framework is also specialized within one of the largest firms in the current Italian software market; for privacy reasons, from here on we will refer to it as “FIRM”. Finally, in order to quantitatively evaluate the proposal, two empirical investigations were carried out: a post-mortem analysis and a case study. Both were carried out in the FIRM context and involved legacy system transformation projects. The first investigation involved 7 already completed projects, the second one 5 ongoing projects. The research questions we ask are: Does the proposed knowledge-based framework lead to more effective risk management than that obtained without using it? Does it lead to more precise risk management? The rest of the paper is organized as follows: section 2 provides a brief overview of the main research activities in the literature dealing with these topics; section 3 presents the proposed framework, and section 4 its specialization in the FIRM context; section 5 describes the empirical studies we executed, whose results are discussed in section 6. Finally, conclusions are drawn in section 7.
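
To illustrate the documented cause-effect relation between issues, risks, and mitigation actions that the abstract calls for, here is a hypothetical sketch of a risk knowledge package; all type names, fields, and example data are our assumptions, not the framework's actual model:

```python
from dataclasses import dataclass, field

@dataclass
class Mitigation:
    action: str
    effective: bool  # outcome observed in the project where it was applied

@dataclass
class Risk:
    description: str
    likelihood: float   # estimated probability of occurrence, 0..1
    impact: int         # e.g. ordinal scale 1 (negligible) to 5 (critical)
    mitigations: list[Mitigation] = field(default_factory=list)

@dataclass
class Issue:
    """A critical project characteristic identified via questionnaire."""
    statement: str
    risks: list[Risk] = field(default_factory=list)

# Example entry from a fictional legacy-transformation project: the chain
# issue -> risk -> mitigation is recorded explicitly, so a future project
# manager can retrieve what worked under similar constraints.
issue = Issue("Undocumented legacy modules",
              [Risk("Effort underestimation", 0.6, 4,
                    [Mitigation("Early code-comprehension spike", True)])])
```
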

    Cybersecurity: A Brief Introduction to a Fundamental Technical and Political Issue

    The article aims to frame cybersecurity as one of the fundamental problems of politics, both domestic and in international relations. The interpretive lens is Antonio Gramsci's concept of hegemony.

    On Internet-of-Things Devices in Ambient Assisted Living Solutions

    In this paper, we present the results of a Rapid Review (RR) of Internet-of-Things (IoT) devices that have been used in Ambient Assisted Living (AAL) solutions for elderly people. Our literature review arises from the need to deliver evidence to the stakeholders involved in the project within which this RR was conducted. Nevertheless, the results can also be of interest to software engineers who want to know which IoT devices have been used in AAL solutions for elderly people, to support decision-making in the development of such solutions. The findings of our RR emerge from 61 papers and can be summarized as follows: (i) a number of IoT devices are used in AAL solutions for elderly people; (ii) most IoT devices do not explicitly focus on specific diseases; and (iii) IoT devices support several needs.

    A Tool for Improving Privacy in Software Development

    Privacy is considered a necessary requirement for software development, and it is necessary to understand how certain software vulnerabilities can create problems for organizations and individuals. In this context, privacy-oriented software development plays a primary role in reducing the problems that can arise simply from individuals' interactions with software applications, even when the data being processed is not directly linked to identifiable individuals. The loss of confidentiality, integrity, or availability at some point in the data processing, such as data theft by external attackers or the unauthorized access or use of data by employees, represents one class of cybersecurity-related privacy events. Therefore, this research work discusses the formalization of 5 key privacy elements (Privacy by Design Principles, Privacy Design Strategies, Privacy Patterns, Vulnerabilities, and Context) in software development and presents a privacy tool that supports developers' decisions to integrate privacy and security requirements in all software development phases.
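
As a rough illustration of how the five privacy elements could be formalized and linked, here is a hypothetical sketch; the type and field names are our assumptions, not the tool's actual model:

```python
from dataclasses import dataclass

@dataclass
class PrivacyPattern:
    name: str              # e.g. a fictional "Data minimisation gateway"
    principles: list[str]  # Privacy by Design principles it realizes
    strategies: list[str]  # Privacy Design Strategies it applies
    mitigates: list[str]   # identifiers of vulnerabilities it addresses

@dataclass
class Context:
    phase: str                   # software development phase
    vulnerabilities: list[str]   # vulnerabilities relevant in this context

def suggest_patterns(catalog: list[PrivacyPattern],
                     ctx: Context) -> list[PrivacyPattern]:
    """Return patterns that mitigate at least one vulnerability in the context."""
    return [p for p in catalog if set(p.mitigates) & set(ctx.vulnerabilities)]
```

Linking patterns to vulnerabilities and contexts in this way is what would let a tool surface privacy advice at each development phase rather than only at design time.
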

    Software Product Lines in Value-Based Software Engineering

    Objective: to evaluate the value of a product line in terms of maintainability, extensibility, and configurability with reference to the interested stakeholders: customers, maintainers, and producers. Rationale: there are values that customers constantly require in a modern software application, and some of these values are supported by product lines. Nevertheless, the conjecture that customer values clash with those of producers and maintainers is widespread in the industrial and scientific communities. Design of study: we designed and carried out a case study in an industrial context, on an ongoing project, to verify the validity of a product line in creating value for stakeholders. Data was collected as the project was being executed over a nine-month period; descriptive statistics and hypothesis testing were then carried out. Results: the experience acquired during the execution of an industrial project has allowed the authors to point out the differences between program families and software product lines. The case study has also shown how product lines contribute to eliciting and reconciling stakeholder value propositions. Conclusions: this study represents a first step towards analyzing the value that product lines offer to the various stakeholders.
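
The abstract reports descriptive statistics and hypothesis testing; as a generic illustration only (fictional data and metric, not the study's actual analysis), such an analysis could look like the following, using SciPy's Mann-Whitney U test, a non-parametric choice suited to the small samples typical of case studies:

```python
from statistics import mean, stdev
from scipy.stats import mannwhitneyu

# Fictional maintenance effort (person-hours per change request) under a
# product-line architecture versus a conventional one.
product_line = [12, 9, 14, 11, 10, 13]
conventional = [18, 15, 21, 17, 16, 19]

for name, sample in [("product line", product_line),
                     ("conventional", conventional)]:
    print(f"{name}: mean={mean(sample):.1f}  sd={stdev(sample):.1f}")

# Two-sided test of the null hypothesis that both samples come from the
# same distribution; a small p-value suggests a real difference in effort.
stat, p = mannwhitneyu(product_line, conventional, alternative="two-sided")
print(f"U={stat}  p={p:.4f}")
```
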

    Human Factors in Software Development Processes: Measuring System Quality

    Software Engineering and Human-Computer Interaction look at the development process from different perspectives. They apparently use very different approaches, are inspired by different principles, and address different needs. But they ultimately share the same goal: to develop high-quality software in the most effective way. The second edition of the workshop pays particular attention to the efforts of the two communities in enhancing system quality. The research question discussed is: who, what, where, when, why, and how should we evaluate?